tests: dt_binding_check: new test #65
base: main
Conversation
We currently "rely" on Rob Herring's mail bot to provide sanity checking on patches that touch device tree bindings:

https://lore.kernel.org/netdev/[email protected]

and that is sufficient in "public" settings (patches sent to [email protected] and [email protected] also copied).

But in "private" settings (i.e. individual developers encouraged to do sanity checking on their own, for example by running ingest_mdir), we quickly find that Rob's mail bot is apparently closed source, and it has happened more than once (at least to me) that I missed newly introduced dt_binding_check errors even though I did the due diligence of running that test locally. So an automated check would be good to have.

The justification for including it in NIPA is that while device tree maintainers review binding patches, they get applied to the subsystem tree (in this case netdev). Furthermore, I don't see the overlap with Rob Herring's mail bot as a strong reason against such checks in NIPA, similar to how the existence of Intel's kbuild test robot does not preclude NIPA from having build tests.

In terms of implementation, "make dt_binding_check" does not always nicely print "error" or "warning" on the lines with issues. Furthermore, the errors are multi-line. So instead of filtering for error lines, here we filter out the "normal" lines, which begin with things such as "make", "SCHEMA", "CHKDT", "LINT", "DTEX", "DTC".

Signed-off-by: Vladimir Oltean <[email protected]>
I've marked this as "draft" because it doesn't run "pip3 install dtschema --upgrade" - I am using it through ingest_mdir.py, so that would translate into a system requirement. I'm not even sure where that would go - in my case I am updating dtschema prior to each build, due to the high frequency of changes.
I have no experience with checks which require some sort of external dependency to be updated. Can't think of a reason why
Well, one reason against running
Is there a way we can check if dtschema is already installed by the user, as implicit agreement to update it?
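A minimal sketch of such a check (the variable name is mine, not from NIPA): probe for the module and treat its presence as consent to upgrade.

```shell
# Sketch: treat an existing dtschema install as implicit consent to
# upgrade it; otherwise leave the user's environment untouched.
# "have_dtschema" is a hypothetical name, not a NIPA variable.
if python3 -c "import dtschema" 2>/dev/null; then
    have_dtschema=yes
    # at this point one could run: pip3 install --upgrade dtschema
else
    have_dtschema=no
fi
echo "dtschema installed: $have_dtschema"
```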
Secondly, updating the same local package during the build still risks corrupting the dtschema state for other builds. I still think that each build needs to have its own.
Could we not simply have a CI and anybody else could periodically run

And the good thing is that this file can also be used with venv.
Eh, I was hoping

Edit: there is --require-virtualenv. Are you saying with the "secondly" that that's still not good enough because the user may not expect an update?
Sorry, but I don't understand the intersection between what problems does

We don't need specific

I'm not running pw_poller.py (and maybe that's part of the problem in understanding), but AFAIU based on the README, there is no time in between builds where such package updates could take place. Assuming a CI running at full steam, this could only be done in Python code by creating a venv for each Tester object (worker), which is isolated from every other worker.
No. Updating the user's packages neither helps nor inherently breaks things. I'm just saying that the CI probably needs to manage a venv for each worker, since there is otherwise no right time to update this package if installed either system-wide or at the user level, because there might be multiple workers pipelined, and there's no "RCU" for package management. |
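A rough sketch of that per-worker isolation, assuming nothing about NIPA's internals (WORKER_ID and VENV_ROOT are invented names; --without-pip only keeps the demo self-contained, a real setup would keep pip so dtschema can be installed into the venv):

```shell
# Sketch: one venv per worker, so upgrading dtschema for one build
# cannot corrupt the package state seen by another build in flight.
WORKER_ID=0                 # hypothetical; NIPA's Tester objects differ
VENV_ROOT="$(mktemp -d)"
python3 -m venv --without-pip "$VENV_ROOT/worker-$WORKER_ID"
# Each worker activates only its own environment; with pip present,
# "pip3 install --upgrade dtschema" here would touch only this venv.
PREFIX="$(. "$VENV_ROOT/worker-$WORKER_ID/bin/activate" && \
          python3 -c 'import sys; print(sys.prefix)')"
echo "worker 0 runs inside: $PREFIX"
rm -rf "$VENV_ROOT"
```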
Ack, I'll admit I don't know much about venv :S I treat Python as next-gen bash.. So I'ma stop talking; whatever you guys decide is right is fine by me.
@vladimiroltean to me, I think the dependencies should not be managed by the test scripts. How often does

Maybe this dependency can simply be managed like the others: updated when needed, e.g. in "maintenance mode", when the test scripts are being updated. Or maybe we want something more explicit with a
As mentioned since the first reply, for me everything other than
Looking at https://pypi.org/project/dtschema/#history, there were 45 updates in 5 years. It depends on how often patchwork worker interruptions are permitted, which I don't know, as I don't manage this kind of instance.
Maybe! Does a well-defined "maintenance mode" process exist currently? For example when NIPA itself needs to pull in the latest changes?
I think requirements.txt is a complication which detracts from the main goal, since the latest dtschema should always be fine.
I think what was unclear was how frequently this dependency has to be updated, and what happens if the latest stable version is not used. If the dependency doesn't have to be updated "regularly", I guess it can be managed by the CI with the other dependencies. @kuba-moo would it be OK like that?
@vladimiroltean do you know if the latest stable version always has to be used? If the CI doesn't have to be frequently or urgently updated, I guess there is no need to have an exception for
I don't know exactly how @kuba-moo manages that, but maintenance time is needed to update NIPA scripts, and some dependencies from time to time, e.g. iproute2.
Sure, I'm fine without it. This file is also useful to simply list Python dependencies: someone trying to validate patches can then know in advance what to install, instead of waiting for errors and trying to understand what is missing.
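For illustration, such a file could be as small as a single line (the content below is my guess based on this thread, not an actual NIPA file):

```shell
# Sketch: generate an illustrative requirements.txt; the single entry
# is an assumption based on this discussion, not taken from the PR.
REQ="$(mktemp)"
cat > "$REQ" <<'EOF'
# Python dependencies for the dt_binding_check test
dtschema
EOF
# Typical consumption, with or without a venv (not executed here):
#   pip3 install -r requirements.txt
cat "$REQ"
```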
robherring
left a comment
The mail bot is open source. The CI job is here: https://gitlab.com/robherring/dt-review-ci/
The mailing part is separate and is here (pw-check-fails): https://gitlab.com/robherring/pw-utils
# Define the filter pattern to exclude build noise from the make output.
# We only want to see the actual error/warning lines.
FILTER_PATTERN="^(make\[| CHKDT| DTC| DTEX| HOSTCC| HOSTLD| LEX| LINT| SCHEMA| YACC|$)"
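To illustrate how the pattern above behaves (the sample lines below are fabricated, not real kbuild output), filtering with grep -vE keeps only the diagnostic:

```shell
# The filter from the diff: drop "normal" make progress lines so that
# only diagnostics survive. The sample input is made up.
FILTER_PATTERN="^(make\[| CHKDT| DTC| DTEX| HOSTCC| HOSTLD| LEX| LINT| SCHEMA| YACC|$)"
FILTERED="$(printf '%s\n' \
    'make[1]: Entering directory /tmp/build' \
    ' SCHEMA  Documentation/devicetree/bindings/processed-schema.json' \
    'binding.yaml: ethernet@0: unevaluated properties are not allowed' \
    | grep -vE "$FILTER_PATTERN")"
echo "$FILTERED"
```

Only the third line survives; the make and SCHEMA lines match the pattern and are dropped.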
Perhaps use 'make -s' instead. Then you need all lines.
I'm happy to adjust the warnings format if that helps.
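For what it's worth, the effect of -s can be seen on a toy makefile (this is a generic GNU make demonstration, not the real dt_binding_check target):

```shell
# Demo: with -s, make suppresses recipe echo and directory messages,
# so every surviving line is output the recipe itself produced.
MK="$(mktemp -d)"
printf 'check:\n\techo binding.yaml: missing required property\n' > "$MK/Makefile"
NOISY="$(make -C "$MK" check | wc -l)"     # enter/leave + echoed recipe + output
QUIET="$(make -s -C "$MK" check | wc -l)"  # just the diagnostic itself
echo "noisy=$NOISY quiet=$QUIET"
rm -rf "$MK"
```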